Linear time-invariant state space models (SSMs) are classical models in engineering and statistics, and were recently shown to be highly promising in machine learning through the Structured State Space sequence model (S4). A core ingredient of S4 is initializing the SSM state matrix to a particular matrix called the HiPPO matrix, which was empirically important for S4's ability to handle long sequences. However, the specific matrix that S4 uses was actually derived for a particular time-varying dynamical system, and using it in a time-invariant SSM had no known mathematical interpretation. Consequently, the theoretical mechanism by which S4 models long-range dependencies actually remained unexplained. We derive a more general and intuitive formulation of the HiPPO framework, which provides a simple mathematical interpretation of S4 as a decomposition onto exponentially-warped Legendre polynomials, explaining its ability to capture long dependencies. Our generalization introduces a theoretically rich class of SSMs, allows us to derive more intuitive S4 variants for other bases such as the Fourier basis, and explains other aspects of training S4, such as how to initialize the important timescale parameter. These insights improve S4's performance to 86% on the Long Range Arena benchmark, with 96% on the most difficult Path-X task.
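As a point of reference for the initialization this abstract refers to, below is a minimal sketch of the commonly published HiPPO-LegS state matrix; exact sign and scaling conventions vary between the HiPPO and S4 papers, so treat this as illustrative rather than as the authors' code.

```python
import numpy as np

def hippo_legs(N):
    """Build the N x N HiPPO-LegS state matrix A and input vector B
    (one common convention; S4 initializes its SSM state matrix this way)."""
    A = np.zeros((N, N))
    for n in range(N):
        for k in range(N):
            if n > k:
                A[n, k] = -np.sqrt((2 * n + 1) * (2 * k + 1))
            elif n == k:
                A[n, k] = -(n + 1)
    B = np.sqrt(2 * np.arange(N) + 1.0)[:, None]
    return A, B
```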
In the post-COVID-19 world, radio frequency (RF)-based non-contact methods, e.g., software-defined radio (SDR)-based methods, have emerged as promising candidates for intelligent remote sensing of human vitals and could help in the containment of contagious viruses like COVID-19. To this end, this work utilizes universal software radio peripheral (USRP)-based SDRs along with classical machine learning (ML) methods to design a non-contact method to monitor different breathing abnormalities. Under our proposed method, a subject rests his/her hand on a table between the transmit and receive antennas while an orthogonal frequency division multiplexing (OFDM) signal passes through the hand. Subsequently, the receiver extracts the channel frequency response (essentially, fine-grained wireless channel state information) and feeds it to various ML algorithms, which eventually classify between different breathing abnormalities. Among all classifiers, the linear SVM classifier resulted in a maximum accuracy of 88.1%. To train the ML classifiers in a supervised manner, data was collected by performing real-time experiments on 4 subjects in a lab environment. For label generation purposes, the breathing of the subjects was classified into three classes: normal, fast, and slow breathing. Furthermore, in addition to our proposed method (where only a hand is exposed to RF signals), we also implemented and tested the state-of-the-art method (where the full chest is exposed to RF radiation). The performance comparison of the two methods reveals a trade-off: the accuracy of our proposed method is slightly inferior, but it results in minimal body exposure to RF radiation compared to the benchmark method.
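A minimal sketch of the classification stage described above, assuming the channel frequency response has already been captured and stored per example; the file names and feature layout are hypothetical, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# X: magnitude of the channel frequency response per OFDM capture
# (hypothetical shapes; one feature vector per labeled breathing segment)
X = np.abs(np.load("csi_features.npy"))   # shape: (n_samples, n_subcarriers)
y = np.load("breathing_labels.npy")       # 0 = normal, 1 = fast, 2 = slow

# Linear SVM, the classifier that performed best in the study
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X.reshape(len(X), -1), y, cv=5)
print("mean cross-validated accuracy:", scores.mean())
```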
Sentence simplification aims at making the structure of text easier to read and understand while maintaining its original meaning. This can be helpful for people with disabilities, new language learners, or those with low literacy. Simplification often involves removing difficult words and rephrasing the sentence. Previous research has focused on tackling this task either by using external linguistic databases for simplification or by using control tokens for the desired fine-tuning of sentences. In this paper, however, we use only pre-trained transformer models. We experiment with a combination of GPT-2 and BERT models, achieving the best SARI score of 46.80 on the Mechanical Turk dataset, which is significantly better than previous state-of-the-art results. The code can be found at https://github.com/amanbasu/sentence-simplification.
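For orientation, here is a small sketch of prompting a pre-trained GPT-2 model from the `transformers` library for simplification; the prompt format and decoding settings are illustrative, and the authors' actual fine-tuning setup is in the linked repository.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Hypothetical "Complex -> Simple" prompt format
complex_sent = "The committee deliberated at length before reaching a verdict."
prompt = f"Complex: {complex_sent}\nSimple:"
ids = tok(prompt, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=30, num_beams=4,
                     pad_token_id=tok.eos_token_id)
# Decode only the newly generated continuation
print(tok.decode(out[0][ids.shape[1]:], skip_special_tokens=True))
```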
Modern deep learning models are over-parameterized, where the optimization setup strongly affects the generalization performance. A key element of reliable optimization for these systems is the modification of the loss function. Sharpness-Aware Minimization (SAM) modifies the underlying loss function to guide descent methods towards flatter minima, which arguably have better generalization abilities. In this paper, we focus on a variant of SAM known as mSAM, which, during training, averages the updates generated by adversarial perturbations across several disjoint shards of a mini-batch. Recent work suggests that mSAM can outperform SAM in terms of test accuracy. However, a comprehensive empirical study of mSAM is missing from the literature -- previous results have mostly been limited to specific architectures and datasets. To that end, this paper presents a thorough empirical evaluation of mSAM on various tasks and datasets. We provide a flexible implementation of mSAM and compare the generalization performance of mSAM to the performance of SAM and vanilla training on different image classification and natural language processing tasks. We also conduct careful experiments to understand the computational cost of training with mSAM, its sensitivity to hyperparameters and its correlation with the flatness of the loss landscape. Our analysis reveals that mSAM yields superior generalization performance and flatter minima, compared to SAM, across a wide range of tasks without significantly increasing computational costs.
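A sketch of one mSAM training step as described above: the mini-batch is split into disjoint shards, each shard gets its own SAM-style ascent to a perturbed point, and the perturbed-point gradients are averaged before the optimizer update. This follows common open-source SAM implementations and is not the authors' code; details such as per-shard gradient normalization may differ.

```python
import torch

def msam_step(model, loss_fn, xb, yb, optimizer, rho=0.05, m=4):
    params = [p for p in model.parameters() if p.requires_grad]
    avg_grads = [torch.zeros_like(p) for p in params]

    for xs, ys in zip(xb.chunk(m), yb.chunk(m)):
        # Gradient at the current weights for this shard
        loss = loss_fn(model(xs), ys)
        grads = torch.autograd.grad(loss, params)
        norm = torch.norm(torch.stack([g.norm() for g in grads]))
        eps = [rho * g / (norm + 1e-12) for g in grads]
        # Ascend to the shard-specific adversarial point w + eps
        with torch.no_grad():
            for p, e in zip(params, eps):
                p.add_(e)
        # Gradient at the perturbed point, then undo the perturbation
        loss_adv = loss_fn(model(xs), ys)
        grads_adv = torch.autograd.grad(loss_adv, params)
        with torch.no_grad():
            for p, e in zip(params, eps):
                p.sub_(e)
        for a, g in zip(avg_grads, grads_adv):
            a.add_(g / m)

    # Apply the averaged sharpness-aware gradient
    for p, a in zip(params, avg_grads):
        p.grad = a
    optimizer.step()
    optimizer.zero_grad()
```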
Accurate segmentation of live cell images has broad applications in clinical and research contexts. Deep learning methods have been able to perform cell segmentation with high accuracy; however, developing machine learning models to do this requires access to high-fidelity images of live cells. Such images are often not available due to resource constraints, such as limited access to high-performance microscopes, or due to the nature of the studied organisms. Segmentation of low-resolution images of live cells is a difficult task. This paper proposes a method to perform live cell segmentation with low-resolution images by performing super-resolution as a pre-processing step in the segmentation pipeline.
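A minimal sketch of the proposed two-stage pipeline, with `sr_model` and `seg_model` standing in for any trained super-resolution and cell-segmentation networks with a Keras-style predict interface; the paper's specific architectures are not assumed here.

```python
import numpy as np

def segment_with_sr(low_res_img, sr_model, seg_model):
    """Super-resolve the low-resolution live-cell image first,
    then run the segmentation model on the enhanced image."""
    # Add batch and channel dims expected by typical Keras models
    hi_res = sr_model.predict(low_res_img[None, ..., None])[0, ..., 0]
    prob_mask = seg_model.predict(hi_res[None, ..., None])[0, ..., 0]
    return (prob_mask > 0.5).astype(np.uint8)
```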
ML-based motion planning is a promising approach to produce agents that exhibit complex behaviors, and automatically adapt to novel environments. In the context of autonomous driving, it is common to treat all available training data equally. However, this approach produces agents that do not perform robustly in safety-critical settings, an issue that cannot be addressed by simply adding more data to the training set: we show that an agent trained using only a 10% subset of the data performs just as well as an agent trained on the entire dataset. We present a method to predict the inherent difficulty of a driving situation given data collected from a fleet of autonomous vehicles deployed on public roads. We then demonstrate that this difficulty score can be used in a zero-shot transfer to generate curricula for an imitation-learning based planning agent. Compared to training on the entire unbiased training dataset, we show that prioritizing difficult driving scenarios both reduces collisions by 15% and increases route adherence by 14% in closed-loop evaluation, all while using only 10% of the training data.
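A sketch of the curriculum idea in the abstract: given per-scenario difficulty scores (predicted by a separate model in the paper, assumed precomputed here), keep only the hardest 10% of scenarios for training.

```python
import numpy as np

def difficulty_curriculum(scenarios, difficulty_scores, budget_frac=0.10):
    """Select the hardest `budget_frac` of scenarios, mirroring the idea of
    training on a 10% subset prioritized by predicted difficulty."""
    k = max(1, int(budget_frac * len(scenarios)))
    order = np.argsort(difficulty_scores)[::-1]   # hardest first
    return [scenarios[i] for i in order[:k]]
```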
Identification of named entities from legal texts is an essential building block for developing other legal Artificial Intelligence applications. Named entities in legal texts are slightly different from, and more fine-grained than, commonly used named entities such as Person, Organization, and Location. In this paper, we introduce a new corpus of 46,545 annotated legal named entities mapped to 14 legal entity types. A baseline model for extracting legal named entities from judgment text is also developed.
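For illustration, this is how such a baseline would typically be queried with the Hugging Face `transformers` pipeline; the checkpoint name below is hypothetical, not the model released with the paper.

```python
from transformers import pipeline

# Hypothetical checkpoint name; any token-classification model fine-tuned on
# the legal NER corpus would be queried the same way.
ner = pipeline("token-classification",
               model="my-org/legal-ner-baseline",
               aggregation_strategy="simple")

text = "The appeal was heard by Justice A. K. Sikri in the Supreme Court of India."
for ent in ner(text):
    print(ent["entity_group"], "->", ent["word"])
```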
We address the general task of structured commonsense reasoning: given a natural language input, the goal is to generate a graph such as an event graph or a reasoning graph. To employ large language models (LMs) for this task, existing approaches ``serialize'' the output graph as a flat list of nodes and edges. Although feasible, these serialized graphs strongly deviate from the natural language corpora that LMs were pre-trained on, hindering LMs from generating them correctly. In this paper, we show that when we instead frame structured commonsense reasoning tasks as code generation tasks, pre-trained LMs of code are better structured commonsense reasoners than LMs of natural language, even when the downstream task does not involve source code at all. We demonstrate our approach across three diverse structured commonsense reasoning tasks. In all these natural language tasks, we show that using our approach, a code generation LM (CODEX) outperforms natural-language LMs that are fine-tuned on the target task (e.g., T5) and other strong LMs such as GPT-3 in the few-shot setting.
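A sketch of the "graph as code" framing: the target structure is rendered as ordinary Python that a code LM can complete in a few-shot prompt, instead of a flat list of nodes and edges. The class and field names below are illustrative, not the exact schema used in the paper.

```python
# A small container that renders an event/plan graph as readable Python
class Tree:
    def __init__(self, goal):
        self.goal = goal
        self.steps = []

    def add_step(self, step, after=None):
        # `after` lists the steps this one depends on (graph edges)
        self.steps.append((step, after or []))

# A worked exemplar a code LM might be prompted with and asked to extend:
plan = Tree(goal="bake a cake")
plan.add_step("gather ingredients")
plan.add_step("preheat the oven", after=["gather ingredients"])
plan.add_step("mix the batter", after=["gather ingredients"])
plan.add_step("bake the batter", after=["preheat the oven", "mix the batter"])
```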
Coordinate measuring machines (CMMs) have been the benchmark for accuracy in measuring solid objects for 50 years or more. However, with the advent of 3D scanning technologies, the accuracy and density of the point clouds they generate have taken over. In this project, we not only compare different algorithms available in 3D scanning software, but also build our own 3D scanner from off-the-shelf components such as a camera and a projector. Our goals are: 1. Develop a prototype 3D scanner that performs with optimal accuracy across a wide variety of objects. 2. Minimize cost by using off-the-shelf components. 3. Achieve accuracy very close to that of a CMM.
Reasoning is a key pillar of human cognition and intelligence. In the past decade, we have witnessed dramatic gains in natural language processing and unprecedented scaling of large language models. Recent work has characterized the ability of few-shot techniques such as chain-of-thought to emulate human reasoning in large language models. This hallmark few-shot capability, combined with ever-scaling language models, opened up vistas of possibilities for solving various tasks, such as math word problems, code completion, and commonsense reasoning. Chain-of-thought (CoT) prompting further pushed model performance by providing intermediate steps and urging the model to follow the same process. Despite its compelling performance, the origin of the reasoning capability in these models is little explored. This work takes preliminary steps towards a deeper understanding of the reasoning mechanisms in large language models. Our approach centers on querying the model while controlling all but one component of the prompt: symbols, patterns, and text. We then analyze the performance divergence across queries. Our results show that the presence of factual patterns in the prompt is not necessary for the success of CoT. Nevertheless, we show empirically that relying solely on patterns is also insufficient for high-quality results. We posit that text imbues patterns with commonsense knowledge and meaning. Our exhaustive empirical analysis provides qualitative examples of the symbiotic relationship between text and patterns. This systematic understanding of CoT enables us to devise a concise chain of thought, dubbed CCoT, in which text and patterns are pruned to retain only their key roles while delivering on-par or higher task-solving rates.
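A toy sketch of the ablation protocol described above, assuming a simple arithmetic exemplar: the chain-of-thought is assembled from three separable components (symbols, patterns, text), and individual components are dropped or masked to probe which ones the model relies on. The exemplar and the masking scheme are illustrative only.

```python
def build_exemplar(symbols=True, patterns=True, text=True):
    """Build one CoT exemplar with selected components enabled."""
    question = "Roger has 5 balls. He buys 2 cans of 3 balls. How many now?"
    steps = []
    if text:
        steps.append("Roger starts with some balls and buys more cans of balls.")
    if patterns and symbols:
        steps.append("5 + 2 * 3 = 11")
    elif patterns:
        # Keep the equation's shape (the pattern) but mask the symbols
        steps.append("X + Y * Z = W")
    answer = "The answer is 11." if symbols else "The answer is W."
    return question + "\n" + "\n".join(steps) + "\n" + answer

print(build_exemplar(symbols=False))   # pattern-only variant of the exemplar
```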